- Path: status.gen.nz!codewrk!amiga3k!tparker
- From: tparker@amiga3k.codeworks.gen.nz (Tom Parker)
- Newsgroups: comp.sys.amiga.hardware
- Subject: Re: FWD: Fate of 68080
- Date: 8 Feb 96 22:59:59 +1300
- Message-ID: <3246.6612T1094T1839@codeworks.gen.nz>
- Reply-To: tparker@codeworks.gen.nz
- References: <4d3c27$n6c@rs18.hrz.th-darmstadt.de> <1141.6593T1100T790@norconnect.no> <4e64h0$q49@kocrsv08.delcoelect.com>
- Organization: Not an Organization
- X-Newsreader: THOR 2.22 (Amiga;UUCP) *UNREGISTERED*
-
-
- Jeffrey William Davis <c23jwd@kocrsv01.delcoelect.com> wrote:
-
- >In article <1141.6593T1100T790@norconnect.no>,
- >Kenneth C. Nilsen <kenneth@norconnect.no> wrote:
- >>>The current/near future top-of-the-line is the PPC 604@150 MHz.
- >>>The 300 MHz was foreseen for the 64-bit PPC 620, but the situation about
- >>>this beast is unclear since the projected performance gain is not enough
- >>>these days. Instead a design rework will be done, maybe they call it 630 or
- >>>so. Anyway the race for MHz sounds silly to me, since the memory can't keep
- >>>pace with it, except for large (and expensive) caches. 700 MHz is science
- >>>fiction I'd say.
- >>
- >>Yeah, you got a point. But wouldn't it be possible to write data in parallel
- >>to memory? I mean, using double memory bus writes so that in principle you
- >>can write not 1 byte but 2 bytes simultaneously, or even 4 bytes, etc. (hope
- >>you get the picture)?
-
- >This 'principle' which you loosely describe is already commonly done, and
- >64 or 128 bits (8 or 16 bytes) or more are written and read simultaneously.
- >Writing data has never really been the problem, it's READING it that
- >causes a bottleneck that is tough to get around.
-
- >When writing, the data can be handed to a cache that will write it whenever
- >it has the time. If a read is attempted on that data while it is still
- >in the cache, you simply deliver (read) the cache value. Granted, there
- >are numerous ways to get data into the cache (including reads) but that's
- >not important here.
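[Editor's sketch, not part of the original post: a minimal model of the write-buffering behaviour described above. The class name and structure are illustrative; real write buffers are hardware, but the logic is the same: writes are absorbed immediately, reads that hit pending data are forwarded, and RAM is updated "whenever there is time".]

```python
class WriteBuffer:
    """Toy write buffer: absorbs writes, forwards reads that hit pending data."""

    def __init__(self):
        self.pending = {}   # address -> data not yet committed to RAM
        self.ram = {}       # backing memory, written when the bus is idle

    def write(self, addr, data):
        # CPU hands the data off and continues; no wait for slow RAM.
        self.pending[addr] = data

    def read(self, addr):
        # A read of still-buffered data is simply served from the buffer.
        if addr in self.pending:
            return self.pending[addr]
        return self.ram.get(addr, 0)

    def drain(self):
        # The memory controller flushes pending writes when it has time.
        self.ram.update(self.pending)
        self.pending.clear()

buf = WriteBuffer()
buf.write(0x100, 42)
assert buf.read(0x100) == 42    # delivered from the buffer, RAM untouched
buf.drain()
assert buf.ram[0x100] == 42     # now committed to backing memory
```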
-
- >When reading, the processor expects the data NOW. There is a limited
- >amount of time between when the processor is able to give an address and
- >when it expects the data to be ready. If you take longer than this to
- >retrieve the data, the processor has to wait; hence, wait-states.
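[Editor's sketch, not part of the original post: the wait-state arithmetic implied above, with assumed example figures. If DRAM takes longer than the cycles the CPU allows for a read, the difference shows up as wait states, and the penalty grows with clock speed.]

```python
import math

def wait_states(cpu_mhz, dram_ns, cycles_allowed=1):
    """Extra cycles the CPU idles when DRAM can't answer in time."""
    # Cycles the DRAM access actually spans at this clock speed:
    cycles_needed = math.ceil(dram_ns * cpu_mhz / 1000.0)
    return max(0, cycles_needed - cycles_allowed)

# A 50 MHz CPU (20 ns cycle) reading 70 ns DRAM idles for 3 cycles:
print(wait_states(50, 70))    # -> 3
# At 700 MHz (~1.4 ns cycle) the same DRAM costs 48 wait states:
print(wait_states(700, 70))   # -> 48
```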
-
- >With RAM technology rapidly falling behind processor speed, we need to
- >somehow 'predict' which piece of data the processor will want and (at
- >the very least) begin reading it before the processor even asks for it.
- >Then place it in a device (cache) which can deliver it to the processor
- >more quickly. This is where the 'expensive' cache comes in.
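[Editor's sketch, not part of the original post: the simplest form of the 'prediction' described above is a next-line prefetcher, which is assumed here for illustration. It guesses the CPU will want the following address and starts that read early, so sequential access streams hit the fast cache while unpredictable ones gain nothing.]

```python
def run(accesses):
    """Count cache hits under a naive next-address prefetch policy."""
    cache, hits = set(), 0
    for addr in accesses:
        if addr in cache:
            hits += 1          # delivered quickly from the cache
        cache.add(addr)        # demand fetch fills the cache
        cache.add(addr + 1)    # prediction: prefetch the next address
    return hits

sequential = list(range(100))          # e.g. streaming through an array
random_ish = [7, 91, 3, 55, 20, 68]    # no pattern to predict
print(run(sequential))   # -> 99: everything after the first access was prefetched
print(run(random_ish))   # -> 0: prediction buys nothing
```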
-
- >CPUs with asynchronous address and data busses make it easier to
- >manage high clock speeds, since it takes less time to latch an address
- >than to complete a R/W cycle on the data bus. This lets you latch
- >multiple addresses and begin working on them while the data bus is
- >saturated, continually trying to keep the data bus from ever having
- >to wait.
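[Editor's sketch, not part of the original post: a back-of-envelope model of the address pipelining described above, with assumed cycle counts. If latching an address takes 1 cycle and a data transfer takes 4, issuing the next address during the current data phase hides all but the first address latch.]

```python
ADDR, DATA, N = 1, 4, 8   # cycles per address latch, per data transfer, transfers

# Non-pipelined bus: address, then data, then the next address, and so on.
serial = N * (ADDR + DATA)

# Pipelined bus: each address latch overlaps the previous data transfer,
# so only the very first latch is paid for on its own.
pipelined = ADDR + N * DATA

print(serial)     # -> 40 cycles
print(pipelined)  # -> 33 cycles, the data bus never idles
```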
-
- >Unfortunately, the only perfect prediction would be one that could
- >deliver any piece of memory on demand as fast as the processor can
- >accept it - hence the < 1ns DRAM @ 700MHz! Not likely to happen in
- >the near future. The further CPU clock speeds fly past current RAM
- >technology, the more ridiculous it becomes.
-
- >--
- >=======================================================================
- >Jeffrey W. Davis (317)451-0503 Domain: c23jwd@eng.delcoelect.com
- >Software Engineer UUCP: deaes!c23jwd
- >Delco Electronics Corporation GM: 8-322-0503 Mail: CT40A
-
-
- --
- Tom Parker - tparker@codeworks.gen.nz
- - 3:772/235.9@Fidonet
- - 41:649/235.9@Amiganet
-
-